Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Evans, Robin; Shpitser, Ilya (Eds.): We consider the problem of maximizing submodular functions under submodular constraints, formulated in two ways: SCSKC and DiffC. Given two submodular functions $f$ and $g$, where $f$ is monotone, the objective of the SCSKC problem is to find a set $S$ of size at most $k$ that maximizes $f(S)$ under the constraint that $g(S) \leq \theta$, for a given value of $\theta$. The DiffC problem focuses on finding a set $S$ of size at most $k$ such that $h(S) = f(S) - g(S)$ is maximized. It is known that these problems are highly inapproximable and do not admit any constant-factor multiplicative approximation algorithms unless NP is easy. Known approximation algorithms involve data-dependent approximation factors that are not efficiently computable. We initiate a study of the design of approximation algorithms where the approximation factors are efficiently computable. For the SCSKC problem, we prove that the greedy algorithm produces a solution whose value is at least $(1 - 1/e)f(\mathrm{OPT}) - A$, where $A$ is a data-dependent additive error. For the DiffC problem, we design an algorithm that uses the SCSKC greedy algorithm as a subroutine. This algorithm produces a solution whose value is at least $(1 - 1/e)h(\mathrm{OPT}) - B$, where $B$ is also a data-dependent additive error. A salient feature of our approach is that the additive error terms can be computed efficiently, thus enabling us to ascertain the quality of the solutions produced.
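The abstract above only states the guarantee for the greedy algorithm. The following is a minimal, hypothetical Python sketch of a feasibility-respecting greedy for a problem of the SCSKC form, written for illustration only: the function name greedy_scskc, the rule of adding the largest-marginal-gain element that keeps $g(S) \leq \theta$, and the toy coverage/cost instance are assumptions of this sketch, not the authors' algorithm or code.

```python
# Hypothetical sketch (not the paper's code): greedy for an SCSKC-style problem,
# maximizing a monotone submodular f(S) with |S| <= k subject to g(S) <= theta.

def greedy_scskc(ground_set, f, g, k, theta):
    """At each step, add the element with the largest marginal gain in f
    among those that keep the submodular constraint g(S) <= theta."""
    S = set()
    while len(S) < k:
        best_elem, best_gain = None, 0.0
        for e in ground_set - S:
            if g(S | {e}) > theta:      # candidate would violate the g-budget
                continue
            gain = f(S | {e}) - f(S)    # marginal gain of e with respect to f
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:           # no feasible element improves f
            break
        S.add(best_elem)
    return S


if __name__ == "__main__":
    # Toy instance: f is set coverage (monotone submodular), g is a modular cost.
    covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d", "e"}}
    cost = {1: 2.0, 2: 1.0, 3: 1.0, 4: 3.0}
    f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
    g = lambda S: sum(cost[e] for e in S)
    print(greedy_scskc(set(covers), f, g, k=2, theta=3.5))
```

The sketch returns only the selected set; it omits the computation of the data-dependent additive error $A$ that the abstract refers to.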
- We investigate the problems of maximizing k-submodular functions over total size constraints and over individual size constraints. k-submodularity generalizes submodularity beyond simply picking items of a ground set, instead associating one of k types with each chosen item. For sensor selection problems, for instance, this enables modeling which type of sensor to place at a location, not simply whether to place a sensor. We propose and analyze threshold-greedy algorithms for both types of constraints. We prove that our proposed algorithms achieve the best known approximation ratios for both constraint types, up to a user-chosen parameter that balances computational complexity against the approximation ratio, while using a number of function evaluations that depends only linearly (up to poly-logarithmic terms) on the number of elements n, the number of types k, and the inverse of the user-chosen parameter. Other algorithms that achieve the best known deterministic approximation ratios require a number of function evaluations that depends linearly on the budget B, while our methods do not. We empirically demonstrate our algorithms' performance in applications of sensor placement with k types and influence maximization with k topics.
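A rough, hypothetical Python sketch of a decreasing-threshold greedy of the kind described in the abstract above, for the total size constraint, is given below. The names (threshold_greedy_ksub, h, eps), the rule of accepting the first type whose marginal gain clears the current threshold, and the stopping rule are assumptions made for illustration, not the authors' algorithm; eps plays the role of the user-chosen parameter trading off computation and approximation quality.

```python
# Hypothetical sketch (not the paper's algorithm): decreasing-threshold greedy for
# k-submodular maximization under a total size constraint B. An assignment is a
# dict mapping an element to one of k types; h evaluates such partial assignments.

def threshold_greedy_ksub(elements, num_types, h, B, eps=0.1):
    assignment = {}
    # Start the threshold at the largest single-assignment value.
    d = max(h({e: t}) for e in elements for t in range(num_types))
    tau, stop = d, (eps / len(elements)) * d
    while tau >= stop and len(assignment) < B:
        for e in elements:
            if e in assignment or len(assignment) >= B:
                continue
            for t in range(num_types):
                # Accept the first type whose marginal gain clears the threshold
                # (a simplification made for this sketch).
                if h({**assignment, e: t}) - h(assignment) >= tau:
                    assignment[e] = t
                    break
        tau *= 1.0 - eps   # geometrically lower the acceptance threshold
    return assignment


if __name__ == "__main__":
    # Toy instance: a modular (hence k-submodular) objective given by
    # per-(element, type) weights, e.g. sensor locations with two sensor types.
    w = {("s1", 0): 3.0, ("s1", 1): 1.0,
         ("s2", 0): 2.0, ("s2", 1): 2.5,
         ("s3", 0): 0.5, ("s3", 1): 4.0}
    h = lambda x: sum(w[(e, t)] for e, t in x.items())
    print(threshold_greedy_ksub(["s1", "s2", "s3"], num_types=2, h=h, B=2))
```

Under this kind of schedule the number of threshold levels is roughly $(1/\epsilon)\log(n/\epsilon)$ and each level costs one pass over all n elements and k types, so the total number of function evaluations is on the order of $(nk/\epsilon)\log(n/\epsilon)$, independent of the budget B. At the level of this sketch, that matches the evaluation count the abstract claims: linear in n and k up to poly-logarithmic factors, with no linear dependence on B.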